Arm looks to launch its own chip after landing Meta contract

The Guardian

The British semiconductor designer Arm is reportedly planning to launch its own chip this year, after landing Meta as one of its first customers. The move represents a major overhaul of the SoftBank-owned group's business model of licensing its chip blueprints to the likes of Apple and Nvidia. Rene Haas, Arm's chief executive, is set to unveil the first in-house chip as early as this summer, according to a report in the Financial Times citing people familiar with the plans. More than 300bn chips based on Arm designs have been shipped since the company was founded in 1990, and almost all the world's smartphones are based on Arm technology. Moving from designing chips to making its own complete processor could also put Arm into competition with some of its biggest customers in the $500bn semiconductor industry.


OpenAI is reportedly considering making its own chips

Engadget

ChatGPT might be powered by homegrown chips in the future, if OpenAI does indeed decide to make its own. According to Reuters, the company is currently exploring the possibility of making its own artificial intelligence chips and has even evaluated a potential acquisition. OpenAI CEO Sam Altman previously blamed GPU shortages for users' concerns about the speed and reliability of the company's API, so he reportedly made acquiring more AI chips a priority. In addition to addressing GPU shortages, using its own chips could make the costs of running OpenAI's products more manageable. Based on an analysis by Stacy Rasgon of Bernstein Research, each ChatGPT query costs the company around 4 cents.


U.S. Chamber of Commerce calls for AI regulation

#artificialintelligence

WASHINGTON, March 9 (Reuters) - The U.S. Chamber of Commerce on Thursday called for regulation of artificial intelligence technology to ensure it does not hurt growth or become a national security risk, a departure from the business lobbying group's typical anti-regulatory stance. While little AI legislation has been proposed so far, ChatGPT, the fast-growing artificial intelligence program praised for its ability to write quick answers to a wide range of queries, has raised U.S. lawmakers' concerns about its impact on national security and education. The Chamber report argues that policymakers and business leaders must quickly ramp up their efforts to establish a "risk-based regulatory framework" that will ensure AI is deployed responsibly. It added that AI is projected to add $13 trillion to global economic growth by 2030 and has already made important contributions, such as easing hospital nursing shortages and mapping wildfires to speed emergency management officials' response. The report emphasized the need to be ready for the technology's looming ubiquity and potential dangers.


Chipmaker Nvidia Launches New System For Autonomous Driving - AI Summary

#artificialintelligence

Sept 20 (Reuters) – Chip giant Nvidia Corp (NVDA.O) on Tuesday unveiled its new computing platform, DRIVE Thor, which would centralize autonomous and assisted driving as well as other digital functions including in-car entertainment. Danny Shapiro, head of Nvidia's automotive business, said DRIVE Thor would be able to replace numerous chips and cables in the car and bring down the overall system cost, although he did not give specific figures on savings. Some automakers have begun designing their own chips to gain more control and cut costs. General Motors' (GM.N) autonomous driving unit Cruise said last week that it had developed its own chips to be deployed by 2025. "There's a lot of companies doing great work, doing things that will benefit mankind and we want to support them," Shapiro said.


GM's Cruise is making its own chips for self-driving vehicles to save on costs

Engadget

GM's Cruise division doesn't want to rely on third-party manufacturers for the chips powering its autonomous vehicles -- so, it's making its own. Based on what Carl Jenkins, the company's VP for Hardware Engineering, told Reuters, the main motivator for the switch is the lofty costs associated with paying for other companies' chips. "Two years ago, we were paying a lot of money for a GPU from a famous vendor," Jenkins told the news organization, referring to NVIDIA. He explained that Cruise couldn't negotiate because it wasn't mass manufacturing autonomous vehicles just yet. Its technology is still in its experimental stages, and while it recently became the first company to secure permission to charge for driverless rides, its operations remain limited.


How Apple's Monster M1 Ultra Chip Keeps Moore's Law Alive

WIRED

For practical purposes, the M1 Ultra acts like a single, impossibly large slice of silicon that does it all. Apple's most powerful chip to date has 114 billion transistors packed into over a hundred processing cores dedicated to logic, graphics, and artificial intelligence, all of it connected to 128 gigabytes of shared memory. But the M1 Ultra is in fact a Frankenstein's monster, consisting of two identical M1 Max chips bolted together using a silicon interface that serves as a bridge. This clever design makes it seem as if the conjoined chips are in fact just one larger whole. As it becomes more difficult to shrink transistors in size, and impractical to make individual chips much bigger, chipmakers are beginning to stitch components together to boost processing power.


Why Tesla Is Designing Chips to Train Its Self-Driving Tech

WIRED

Now, it's also the latest company to seek an edge in artificial intelligence by making its own silicon chips. At a promotional event last month, Tesla revealed details of a custom AI chip called D1 for training the machine-learning algorithm behind its Autopilot self-driving system. The event focused on Tesla's AI work and featured a dancing human posing as a humanoid robot the company intends to build. Tesla is the latest nontraditional chipmaker to design its own silicon. As AI becomes more important and costly to deploy, other companies that are heavily invested in the technology--including Google, Amazon, and Microsoft--also now design their own chips.


Google is launching its own chip(s) 🌟

#artificialintelligence

Remember how, a few months back, Apple's customised M-series chips took the market by storm? Well, Google is planning to do something similar. The company is building its own set of chips for its new devices, the Pixel 6 and 6 Pro, which will launch later this year. Previously, the company integrated Qualcomm chips as its core processor, but it has now decided to go with custom chips of its own, named 'Tensor'. The core reason for the new chips is to optimise the phones' performance by running Google's customised machine learning models.


Google To Build Its Own Chip For New Pixel Smartphone

International Business Times

Google on Monday unveiled a new flagship Pixel smartphone powered by its first mobile chip, putting artificial intelligence in people's hands. Pixel 6 models, set for release later this year with superfast 5G wireless capability, will debut Google's own Tensor chip, crafted along the lines of processors it made for data centers to enable computers to think more like people do. "It's basically a mobile system on a chip designed around artificial intelligence," Google devices senior vice president Rick Osterloh said during a briefing at the company's headquarters in Silicon Valley. "We're really excited about it. We're setting the stage to really grow the business."


Amazon shifts some Alexa and Rekognition computing to its own Inferentia chip

#artificialintelligence

Amazon.com on Thursday said it shifted part of the computing for its Alexa voice assistant to its own custom-designed chips, aiming to make the work faster and cheaper while moving it away from chips supplied by Nvidia. When users of devices such as Amazon's Echo line of smart speakers ask the voice assistant a question, the query is sent to one of Amazon's data centers for several steps of processing. When Amazon's computers spit out an answer, that reply is in a text format that must be translated into audible speech for the voice assistant. Amazon previously handled that computing using chips from Nvidia but now the "majority" of it will happen using its own Inferentia computing chip. First announced in 2018, the Amazon chip is custom designed to speed up large volumes of machine learning tasks such as translating text to speech or recognizing images.
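The flow described above can be sketched as a three-stage pipeline. This is an illustrative sketch only: the function names, data shapes, and routing logic below are hypothetical stand-ins, not Amazon's actual Alexa or Inferentia APIs; the point is simply that the text-to-speech inference stage is the step reported to run mostly on Inferentia.

```python
# Hypothetical sketch of the voice-assistant pipeline described in the article:
# device audio -> data-center processing -> text reply -> synthesized speech.

def transcribe(audio: bytes) -> str:
    """Speech-to-text step (placeholder for a real ASR model)."""
    return audio.decode("utf-8")

def answer(query: str) -> str:
    """Query-answering steps run in the data center (placeholder)."""
    return f"Answer to: {query}"

def text_to_speech(text: str, accelerator: str = "inferentia") -> dict:
    """TTS inference; per the article, the 'majority' of this work
    now runs on Amazon's own Inferentia chip rather than Nvidia GPUs."""
    return {"audio_for": text, "ran_on": accelerator}

def handle_query(audio: bytes) -> dict:
    # 1. The device sends the spoken query to a data center.
    query = transcribe(audio)
    # 2. Several processing steps produce a reply in text format.
    reply = answer(query)
    # 3. The text reply is translated into audible speech.
    return text_to_speech(reply)
```

The sketch isolates the last stage because that is the workload Amazon says it moved to its own silicon; the earlier stages could target different hardware independently.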